Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning

Yalcinkaya, Beyazit, Lauffer, Niklas, Vazquez-Chanlatte, Marcell, Seshia, Sanjit A.

arXiv.org Artificial Intelligence

Goal-conditioned reinforcement learning is a powerful way to control an AI agent's behavior at runtime. That said, popular goal representations, e.g., target states or natural language, are either limited to Markovian tasks or rely on ambiguous task semantics. We propose representing temporal goals using compositions of deterministic finite automata (cDFAs) and use cDFAs to guide RL agents. cDFAs balance the need for formal temporal semantics with ease of interpretation: if one can understand a flow chart, one can understand a cDFA. On the other hand, cDFAs form a countably infinite concept class with Boolean semantics, and subtle changes to the automaton can result in very different tasks, making them difficult to condition agent behavior on. To address this, we observe that all paths through a DFA correspond to a series of reach-avoid tasks and propose pre-training graph neural network embeddings on "reach-avoid derived" DFAs. Through empirical evaluation, we demonstrate that the proposed pre-training method enables zero-shot generalization to various cDFA task classes and accelerated policy specialization without the myopic suboptimality of hierarchical methods.
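The abstract's key observation is that every path through a DFA is a series of reach-avoid tasks. A minimal illustrative sketch (not the paper's implementation) of a single reach-avoid DFA over the symbols `"goal"`, `"hazard"`, and `"other"`: the agent accepts if it reaches the goal before touching the hazard, with a sink state for failure.

```python
# Illustrative reach-avoid DFA: reach "goal" while avoiding "hazard".
# State 0 = in progress, 1 = accepting (goal reached), 2 = failure sink.
# State names and symbol alphabet are assumptions for this sketch.

TRANSITIONS = {
    (0, "goal"): 1, (0, "hazard"): 2, (0, "other"): 0,
    (1, "goal"): 1, (1, "hazard"): 1, (1, "other"): 1,  # success is absorbing
    (2, "goal"): 2, (2, "hazard"): 2, (2, "other"): 2,  # failure is absorbing
}
ACCEPTING = {1}

def dfa_accepts(trace):
    """Run a trace of symbols through the DFA and report acceptance."""
    state = 0
    for symbol in trace:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING
```

A cDFA task in the paper's sense would condition the policy on a conjunction of such machines; this sketch only shows why one DFA path reads as "reach the accepting state while avoiding the sink."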


Meta-Learning Conjugate Priors for Few-Shot Bayesian Optimization

Plug, Ruduan

arXiv.org Artificial Intelligence

Bayesian optimization is a statistical modelling methodology that uses a Gaussian process prior distribution and iteratively updates a posterior distribution toward the true distribution of the data. Finding unbiased, informative priors to sample from is challenging, and the choice of prior can greatly influence the posterior distribution when only a few data points are available. In this paper we propose a novel approach that uses meta-learning to automate the estimation of informative conjugate prior distributions for a given distribution class. From this process we generate priors that require only a few data points to estimate the shape parameters of the original distribution of the data.
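The reason conjugate priors are attractive in the few-shot regime is that the posterior update is closed-form, so a handful of observations suffices. A minimal sketch with the standard Beta-Bernoulli conjugate pair (illustrative only; the paper meta-learns which prior parameters to use, which this sketch does not show):

```python
# Closed-form conjugate update: Beta(alpha, beta) prior on a Bernoulli
# success rate. Observing s successes and f failures yields the
# posterior Beta(alpha + s, beta + f) with no numerical inference.

def beta_bernoulli_update(alpha, beta, observations):
    """Update a Beta prior with 0/1 Bernoulli observations."""
    successes = sum(observations)
    failures = len(observations) - successes
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)
```

For example, a Beta(2, 2) prior updated with three observations [1, 1, 0] becomes Beta(4, 3), shifting the posterior mean from 0.5 toward the observed success rate.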


Training few-shot classification via the perspective of minibatch and pretraining

Huang, Meiyu, Xiang, Xueshuang, Xu, Yao

arXiv.org Machine Learning

Few-shot classification is a challenging task that aims to model the human ability to learn concepts from limited prior data, and it has drawn considerable attention in machine learning. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model of a learning algorithm is defined and trained on extremely large or infinite sets of episodes representing different classification tasks, each with a small labeled support set and a corresponding query set. In this work, we advance this few-shot classification paradigm by formulating it as a supervised classification learning problem. We further propose multi-episode and cross-way training techniques, which correspond respectively to minibatching and pretraining in standard classification problems. Experimental results on a state-of-the-art few-shot classification method (prototypical networks) demonstrate that both proposed training strategies substantially accelerate the training process, without accuracy loss, across varying few-shot classification problems on Omniglot and miniImageNet.
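The base method the abstract builds on, prototypical networks, classifies a query by distance to per-class prototypes computed from the support set. A hedged sketch of that nearest-prototype rule (plain lists of floats stand in for a learned encoder's embeddings, which this sketch does not train):

```python
# Nearest-prototype classification as in prototypical networks: each
# class prototype is the mean of its support embeddings, and a query is
# assigned to the class whose prototype is closest in squared Euclidean
# distance. The embeddings here are illustrative stand-ins.

def mean_vector(vectors):
    """Componentwise mean of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support: dict mapping class label -> list of support embeddings."""
    prototypes = {label: mean_vector(vecs) for label, vecs in support.items()}
    return min(prototypes, key=lambda label: sq_dist(query, prototypes[label]))
```

The paper's multi-episode and cross-way techniques change how episodes are batched and how many classes appear per episode during training; the per-episode classification rule above is unchanged.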